Discriminative Regularization: A New Classifier Learning Method

Authors

  • Hui Xue
  • Songcan Chen
  • Qiang Yang
Abstract

Regularization encompasses a large family of state-of-the-art techniques in classifier learning. However, because traditional regularization methods essentially derive from ill-posed multivariate function-fitting problems, which can be viewed as a kind of regression, in classifier design they are mostly concerned with the smoothness of the classifier and do not sufficiently exploit the prior knowledge carried by the given samples. In fact, owing to the nature of classification, the classifier need not be smooth everywhere, especially near the discriminant boundaries between classes. Radial Basis Function Networks (RBFNs) and Support Vector Machines (SVMs), two of the best-known members of the regularization family, acknowledge the importance of such prior information to some extent, but each focuses on only one side of it: the intra-class or the inter-class information, respectively. In this paper, we present a novel regularization method, Discriminative Regularization (DR), which provides a general way to incorporate prior knowledge for classification. By introducing the prior information into the regularization term, DR aims to minimize the empirical loss between the desired and actual outputs while simultaneously maximizing the inter-class separability and minimizing the intra-class compactness in the output space. Furthermore, by embedding equality constraints in the formulation, the solution of DR follows from solving a set of linear equations. Classification experiments show the superiority of the proposed DR.

* Corresponding author: Tel: +86-25-84896481 Ext. 12106; Fax: +86-25-84498069; E-mail: [email protected] (S. Chen), [email protected] (H. Xue) and [email protected] (Q. Yang)
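
The objective described in the abstract can be made concrete with a small sketch. The following is a minimal illustration, assuming a linear model f(x) = Wᵀx and plain input-space scatter matrices; the names fit_dr and scatter_matrices and the trade-off parameters lam_w, lam_b, and ridge are illustrative assumptions, not the paper's notation, and the paper's actual formulation (including its equality constraints) may differ.

```python
import numpy as np

def scatter_matrices(X, labels):
    """Within-class (S_w) and between-class (S_b) scatter matrices of X."""
    d = X.shape[1]
    mean_all = X.mean(axis=0)
    S_w = np.zeros((d, d))
    S_b = np.zeros((d, d))
    for c in np.unique(labels):
        Xc = X[labels == c]
        mc = Xc.mean(axis=0)
        S_w += (Xc - mc).T @ (Xc - mc)          # intra-class compactness
        diff = (mc - mean_all)[:, None]
        S_b += len(Xc) * (diff @ diff.T)        # inter-class separability
    return S_w, S_b

def fit_dr(X, Y, labels, lam_w=1.0, lam_b=0.1, ridge=1e-6):
    """Minimize ||XW - Y||^2 + lam_w*tr(W'S_w W) - lam_b*tr(W'S_b W).

    Setting the gradient to zero yields the linear system
    (X'X + lam_w*S_w - lam_b*S_b) W = X'Y, mirroring the abstract's claim
    that the DR solution follows from solving a set of linear equations.
    """
    S_w, S_b = scatter_matrices(X, labels)
    d = X.shape[1]
    A = X.T @ X + lam_w * S_w - lam_b * S_b + ridge * np.eye(d)
    return np.linalg.solve(A, X.T @ Y)          # ridge keeps A invertible

# Toy usage: two Gaussian blobs with one-hot targets.
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0, 1, (50, 2)), rng.normal(3, 1, (50, 2))])
labels = np.repeat([0, 1], 50)
Y = np.eye(2)[labels]
W = fit_dr(X, Y, labels)
print("training accuracy:", ((X @ W).argmax(axis=1) == labels).mean())
```

Note that a large lam_b can make the system matrix indefinite; the small ridge term is one simple guard, and choosing the two trade-off parameters remains the usual model-selection question.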

Related Articles

Regularized margin-based conditional log-likelihood loss for prototype learning

The classification performance of nearest prototype classifiers largely relies on the prototype learning algorithm. The minimum classification error (MCE) method and the soft nearest prototype classifier (SNPC) method are two important algorithms using misclassification loss. This paper proposes a new prototype learning algorithm based on the conditional log-likelihood loss (CLL), which is base...

Semi-supervised Max-margin Topic Model with Manifold Posterior Regularization

Supervised topic models leverage label information to learn discriminative latent topic representations. As collecting a fully labeled dataset is often time-consuming, semi-supervised learning is of high interest. In this paper, we present an effective semi-supervised max-margin topic model by naturally introducing manifold posterior regularization to a regularized Bayesian topic model, named L...

Discriminality-driven regularization framework for indefinite kernel machine

Indefinite kernel machines have attracted increasing interest in machine learning due to their better empirical classification performance than common positive definite kernel machines in many applications. A key to implementing an effective kernel machine is how to use prior knowledge as fully as possible to guide the appropriate construction of the kernels. However, most of the existing ...

Support vector machine with hypergraph-based pairwise constraints

Although the support vector machine (SVM) has become a powerful tool for pattern classification and regression, a major disadvantage is that it fails to exploit the underlying correlation between pairs of data points as much as possible. Inspired by the modified pairwise constraints trick, in this paper we propose a novel classifier termed the support vector machine with hypergraph-based pairwise con...

L1/Lp Regularization of Differences

In this paper, we introduce L1/Lp regularization of differences as a new regularization approach that can directly regularize models such as the naive Bayes classifier and (autoregressive) hidden Markov models. An algorithm is developed that selects values of the regularization parameter based on a derived stability condition. For the regularized naive Bayes classifier, we show that the method ...
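
The teaser above is cut off before it defines the regularizer, so the following is only an illustrative guess at what an L1 penalty on parameter differences might look like; l1_diff_penalty and its arguments are hypothetical names, not taken from that paper.

```python
import numpy as np

def l1_diff_penalty(theta_a, theta_b, lam=0.1):
    """L1 penalty on the difference of two parameter vectors.

    Hypothetical reading of "regularization of differences": pulling the
    corresponding parameters of two classes together, so that a difference
    shrunk to exactly zero removes that feature's discriminative effect.
    """
    return lam * np.abs(theta_a - theta_b).sum()

# Example: log-probability parameters of a two-class naive Bayes model.
theta_pos = np.log([0.6, 0.3, 0.1])
theta_neg = np.log([0.5, 0.3, 0.2])
print(l1_diff_penalty(theta_pos, theta_neg))
```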

Journal:

Volume   Issue

Pages  -

Publication date: 2008